Use Keras experiment to transfer style
This notebook contains the steps and code required to demonstrate the style transfer technique using the Watson Machine Learning service. It introduces commands for getting data, persisting a training definition to the Watson Machine Learning repository, and training a model.
Some familiarity with Python is helpful. This notebook uses Python 3 and Watson Studio environments.
In this notebook you learn to work with Watson Machine Learning experiments to train Deep Learning models (Keras).
Before you use the sample code in this notebook, you must perform the following setup tasks:
Create new credentials with HMAC:
Add the inline configuration parameter {"<a href="https://console.bluemix.net/docs/services/cloud-object-storage/hmac/credentials.html#using-hmac-credentials" target="_blank" rel="noopener noreferrer">HMAC</a>": true}, then click Add.
This configuration parameter adds the following section to the instance credentials (used later in this notebook):
"cos_hmac_keys": {
"access_key_id": "-------",
"secret_access_key": "-------"
}
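For illustration, the HMAC section can be read out of the credentials JSON like any other field. A minimal sketch; the key values below are placeholders, not real credentials:

```python
import json

# Placeholder credentials JSON, shaped like the snippet from the
# Service credentials tab (values here are made up for illustration).
creds_json = '''
{
  "apikey": "***",
  "cos_hmac_keys": {
    "access_key_id": "my-access-key",
    "secret_access_key": "my-secret-key"
  }
}
'''

creds = json.loads(creds_json)
hmac = creds['cos_hmac_keys']

print(hmac['access_key_id'])      # -> my-access-key
print(hmac['secret_access_key'])  # -> my-secret-key
```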
In this section:
Import the Boto library, which allows Python developers to manage COS.
# Import the boto library
import ibm_boto3
from ibm_botocore.client import Config
import os
import json
import warnings
import urllib.request
import time
warnings.filterwarnings('ignore')
Authenticate to COS and define the endpoint you will use.
Enter your COS credentials in the following cell. You can find these credentials in your COS instance dashboard under the Service credentials tab as described in the set up section.
Go to the Endpoint tab in the COS instance's dashboard to get the endpoint information, for example: s3-api.us-geo.objectstorage.softlayer.net.
# Enter your COS credentials.
cos_credentials = {
"apikey": "***",
"cos_hmac_keys": {
"access_key_id": "***",
"secret_access_key": "***"
},
"endpoints": "https://cos-service.bluemix.net/endpoints",
"iam_apikey_description": "***",
"iam_apikey_name": "***",
"iam_role_crn": "crn:v1:bluemix:public:iam::::serviceRole:Writer",
"iam_serviceid_crn": "***",
"resource_instance_id": "***"
}
api_key = cos_credentials['apikey']
service_instance_id = cos_credentials['resource_instance_id']
auth_endpoint = 'https://iam.bluemix.net/oidc/token'
# Enter your Endpoint information.
service_endpoint = 'https://s3-api.us-geo.objectstorage.softlayer.net'
Create the Boto resource by providing the resource type, endpoint_url, and credentials.
cos = ibm_boto3.resource('s3',
ibm_api_key_id=api_key,
ibm_service_instance_id=service_instance_id,
ibm_auth_endpoint=auth_endpoint,
config=Config(signature_version='oauth'),
endpoint_url=service_endpoint)
Create the buckets you will use to store training data and training results.
Note: Bucket names must be unique.
# Create two buckets, style-data-example-2 and style-results-example-2
buckets = ['style-data-example-2', 'style-results-example-2']
for bucket in buckets:
    if cos.Bucket(bucket) not in cos.buckets.all():
print('Creating bucket "{}"...'.format(bucket))
try:
cos.create_bucket(Bucket=bucket)
except ibm_boto3.exceptions.ibm_botocore.client.ClientError as e:
print('Error: {}.'.format(e.response['Error']['Message']))
You have now created two new buckets:
style-data-example-2
style-results-example-2
Display a list of buckets for your COS instance to verify that the buckets were created.
# Display the buckets
print(list(cos.buckets.all()))
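Because bucket names must be globally unique, a common trick is to append a random suffix instead of hard-coding names like the ones above. A hypothetical sketch (kept in a separate variable, unique_buckets, so it does not clobber the buckets list used in this notebook):

```python
import uuid

# A short random suffix makes the bucket names unlikely to collide
# with buckets created by other users of the service.
suffix = uuid.uuid4().hex[:8]
unique_buckets = ['style-data-' + suffix, 'style-results-' + suffix]
print(unique_buckets)
```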
Download your training data and upload it to the data bucket (the first bucket you created). Then, create a list of links for the training dataset.
The following code snippet creates the STYLE_DATA folder and downloads the files from the links to the folder.
Tip: First, use the !pip install wget command to install the wget library:
!pip install wget
import wget, os
# Create folder
data_dir = 'STYLE_DATA'
if not os.path.isdir(data_dir):
os.mkdir(data_dir)
links = ['https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg19_weights_tf_dim_ordering_tf_kernels_notop.h5',
'https://upload.wikimedia.org/wikipedia/commons/thumb/e/ea/Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg/1513px-Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg',
'https://upload.wikimedia.org/wikipedia/commons/5/52/Krak%C3%B3w_239a.jpg',
'https://upload.wikimedia.org/wikipedia/commons/3/3f/Kandinsky%2C_Lyrisches.jpg']
# Download the links to the folder
for link in links:
    if 'Gogh' in link:
        filepath = os.path.join(data_dir, 'van_gogh.jpg')
    elif 'Krak' in link:
        filepath = os.path.join(data_dir, 'krakow.jpg')
    elif 'Kandinsky' in link:
        filepath = os.path.join(data_dir, 'kandinsky.jpg')
    else:
        filepath = os.path.join(data_dir, link.split('/')[-1])
    if not os.path.isfile(filepath):
        print(link)
        urllib.request.urlretrieve(link, filepath)
# List the files in the STYLE_DATA folder
!ls STYLE_DATA
from IPython.display import Image
Image(filename=os.path.join(data_dir, 'krakow.jpg'), width=1000)
Image(filename=os.path.join(data_dir, 'van_gogh.jpg'), width=500)
Image(filename=os.path.join(data_dir, 'kandinsky.jpg'), width=600)
Upload the data files to the data bucket.
bucket_name = buckets[0]
bucket_obj = cos.Bucket(bucket_name)
for filename in os.listdir(data_dir):
    # upload_file takes a local path, so no open() call is needed.
    bucket_obj.upload_file(os.path.join(data_dir, filename), filename)
    print('{} is uploaded.'.format(filename))
List the contents of the data bucket.
for obj in bucket_obj.objects.all():
print('Object key: {}'.format(obj.key))
print('Object size (kb): {}'.format(obj.size/1024))
You are done with COS, and you are now ready to train your model!
Load the libraries you need.
import urllib3, requests, json, base64, time, os
Authenticate to the Watson Machine Learning (WML) service on IBM Cloud.
Tip: Authentication information (your credentials) can be found in the Service credentials tab of the service instance that you created on IBM Cloud. If there are no credentials listed for your instance in Service credentials, click New credential (+) and enter the information required to generate new authentication information.
Action: Enter your WML service instance credentials here.
wml_credentials = {
"instance_id": "***",
"password": "***",
"url": "https://ibm-watson-ml.mybluemix.net",
"username": "***",
}
Install watson-machine-learning-client from PyPI.
!rm -rf $PIP_BUILD/watson-machine-learning-client
!pip install --upgrade watson-machine-learning-client
Import watson-machine-learning-client and authenticate to the service instance.
from watson_machine_learning_client import WatsonMachineLearningAPIClient
client = WatsonMachineLearningAPIClient(wml_credentials)
print(client.version)
Hint: The quality of the final image depends on the number of iterations, which also affects the training time.
# Set the number of iterations.
iters = 1
model_definition_1_metadata = {
client.repository.DefinitionMetaNames.NAME: "style transfer van gogh",
client.repository.DefinitionMetaNames.FRAMEWORK_NAME: "tensorflow",
client.repository.DefinitionMetaNames.FRAMEWORK_VERSION: "1.5",
client.repository.DefinitionMetaNames.RUNTIME_NAME: "python",
client.repository.DefinitionMetaNames.RUNTIME_VERSION: "3.5",
client.repository.DefinitionMetaNames.EXECUTION_COMMAND: "python style_transfer.py krakow.jpg van_gogh.jpg krakow --iter " + str(iters)
}
model_definition_2_metadata = {
client.repository.DefinitionMetaNames.NAME: "style transfer kandinsky",
client.repository.DefinitionMetaNames.FRAMEWORK_NAME: "tensorflow",
client.repository.DefinitionMetaNames.FRAMEWORK_VERSION: "1.5",
client.repository.DefinitionMetaNames.RUNTIME_NAME: "python",
client.repository.DefinitionMetaNames.RUNTIME_VERSION: "3.5",
client.repository.DefinitionMetaNames.EXECUTION_COMMAND: "python style_transfer.py krakow.jpg kandinsky.jpg krakow --iter " + str(iters)
}
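The EXECUTION_COMMAND strings above pass a content image, a style image, an output prefix, and --iter to style_transfer.py. A hypothetical sketch of how such a command line could be parsed with argparse; the actual script's argument handling may differ, and the argument layout here is inferred from the commands above:

```python
import argparse

# Argument layout assumed from the EXECUTION_COMMAND strings:
# style_transfer.py <content_image> <style_image> <output_prefix> --iter N
parser = argparse.ArgumentParser(description='Neural style transfer')
parser.add_argument('content_image')
parser.add_argument('style_image')
parser.add_argument('output_prefix')
parser.add_argument('--iter', type=int, default=10)

args = parser.parse_args(['krakow.jpg', 'van_gogh.jpg', 'krakow', '--iter', '1'])
print(args.content_image, args.style_image, args.output_prefix, args.iter)
```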
!rm -rf STYLE.zip
filename_definition = 'STYLE.zip'
if not os.path.isfile(filename_definition):
!wget https://github.com/pmservice/wml-sample-models/raw/master/keras/style/definition/STYLE.zip
!ls STYLE.zip
definition_details = client.repository.store_definition(filename_definition, model_definition_1_metadata)
definition_url = client.repository.get_definition_url(definition_details)
definition_uid = client.repository.get_definition_uid(definition_details)
print(definition_url)
definition_2_details = client.repository.store_definition(filename_definition, model_definition_2_metadata)
definition_2_url = client.repository.get_definition_url(definition_2_details)
definition_2_uid = client.repository.get_definition_uid(definition_2_details)
print(definition_2_url)
client.repository.list_definitions()
Get a list of supported configuration parameters.
client.repository.ExperimentMetaNames.show()
Create an experiment, which will train two models based on previously stored definitions.
TRAINING_DATA_REFERENCE = {
"connection": {
"endpoint_url": service_endpoint,
"access_key_id": cos_credentials['cos_hmac_keys']['access_key_id'],
"secret_access_key": cos_credentials['cos_hmac_keys']['secret_access_key']
},
"source": {
"bucket": buckets[0],
},
"type": "s3"
}
TRAINING_RESULTS_REFERENCE = {
"connection": {
"endpoint_url": service_endpoint,
"access_key_id": cos_credentials['cos_hmac_keys']['access_key_id'],
"secret_access_key": cos_credentials['cos_hmac_keys']['secret_access_key']
},
"target": {
"bucket": buckets[1],
},
"type": "s3"
}
experiment_metadata = {
client.repository.ExperimentMetaNames.NAME: "STYLE experiment",
client.repository.ExperimentMetaNames.TRAINING_DATA_REFERENCE: TRAINING_DATA_REFERENCE,
client.repository.ExperimentMetaNames.TRAINING_RESULTS_REFERENCE: TRAINING_RESULTS_REFERENCE,
client.repository.ExperimentMetaNames.TRAINING_REFERENCES: [
{
"name": "van gogh - cracow",
"training_definition_url": definition_url,
"compute_configuration": {"name": "k80x4"}
},
{
"name": "kandinsky - cracow",
"training_definition_url": definition_2_url,
"compute_configuration": {"name": "k80x4"}
},
],
}
Store the experiment in the WML repository.
# Store the experiment and display the experiment_uid.
experiment_details = client.repository.store_experiment(meta_props=experiment_metadata)
experiment_uid = client.repository.get_experiment_uid(experiment_details)
print(experiment_uid)
List the stored experiments.
client.repository.list_experiments()
Get the experiment details.
details = client.repository.get_experiment_details(experiment_uid)
Tip: To run the experiment in the background, set the optional parameter asynchronous=True (or remove it).
experiment_run_details = client.experiments.run(experiment_uid, asynchronous=False)
As you can see, the experiment run has finished.
experiment_run_id = client.experiments.get_run_uid(experiment_run_details)
print(experiment_run_id)
The code in the following cell gets details about a particular experiment run.
run_details = client.experiments.get_run_details(experiment_run_id)
Call client.experiments.get_status(run_uid) to check the experiment run status. This is useful when you run an experiment in the background.
status = client.experiments.get_status(experiment_run_id)
print(status)
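When an experiment runs in the background, a simple polling loop around the status call is a common pattern. A sketch with a stubbed status function standing in for client.experiments.get_status(experiment_run_id); the state strings and the 'state' key used by the stub are assumptions for illustration:

```python
import time

def wait_until_done(get_status, poll_interval=1.0, max_polls=60):
    """Poll a status function until the run reaches a terminal state."""
    for _ in range(max_polls):
        status = get_status()
        state = status.get('state')
        if state in ('completed', 'error', 'canceled'):
            return state
        time.sleep(poll_interval)
    raise TimeoutError('experiment run did not finish in time')

# Stub standing in for the real client call; each poll advances the state.
_states = iter(['pending', 'running', 'completed'])
final_state = wait_until_done(lambda: {'state': next(_states)}, poll_interval=0.01)
print(final_state)  # -> completed
```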
Call client.experiments.monitor_logs(run_uid) to monitor the experiment run; this method streams the training log content to the console. You can also list the individual training runs within the experiment run:
client.experiments.list_training_runs(experiment_run_id)
As you can see, two training runs completed.
# List the training uids.
training_uids = client.experiments.get_training_uids(experiment_run_details)
print(training_uids)
bucket_name = buckets[1]
bucket_obj = cos.Bucket(bucket_name)
transfered_images = []
for uid in training_uids:
obj = bucket_obj.Object(uid + '/transfered_images/krakow_at_iteration_' + str(iters-1) + '.png')
    filename = 'krakow_transfered_' + str(uid) + '.png'
transfered_images.append(filename)
with open(filename, 'wb') as data:
obj.download_fileobj(data)
print(filename)
Have a look at the original picture again.
Image(filename=os.path.join(data_dir, 'krakow.jpg'), width=1000)
Display the picture after Van Gogh style has been applied.
Image(filename=transfered_images[0], width=1000)
Display the picture after Kandinsky style has been applied.
Image(filename=transfered_images[1], width=1000)
You successfully completed this notebook! You learned how to use watson-machine-learning-client to run experiments.
Lukasz Cmielowski, PhD, is an Automation Architect and Data Scientist at IBM with a track record of developing enterprise-level applications that substantially increase clients' ability to turn data into actionable knowledge.
Copyright © 2018 IBM. This notebook and its source code are released under the terms of the MIT License.